
Explanation-by-Example Based on Item Response Theory

Conference paper · Intelligent Systems (BRACIS 2022)

Abstract

Intelligent systems that use Machine Learning classification algorithms are increasingly common in everyday life. However, many of these systems rely on black-box models that cannot explain their own predictions. This situation leads researchers in the field, and society at large, to a central question: how can I trust the prediction of a model I cannot understand? In this context, XAI has emerged as a field of AI that aims to create techniques capable of explaining a classifier's decisions to the end user. Several such techniques exist, among them Explanation-by-Example, which so far has few initiatives consolidated by the community working on XAI. This research explores Item Response Theory (IRT) as a tool for explaining models and for measuring the reliability of the Explanation-by-Example approach. To this end, four datasets with different levels of complexity were used, and a Random Forest model served as the hypothesis under test. On the test set, 83.8% of the model's errors came from instances on which IRT indicated the model to be unreliable.
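For intuition about the method, here is a minimal Python sketch of how a fitted IRT model can be turned into a per-instance reliability flag. It treats each test instance as an IRT item with discrimination a, difficulty b, and guessing parameter c, and the classifier as a respondent with estimated ability theta; the three-parameter logistic (3PL) item characteristic curve then gives the expected probability of a correct prediction. The 3PL curve is standard IRT, but the function names, parameter values, and the 0.5 threshold below are illustrative assumptions, not the authors' exact pipeline.

import numpy as np

def p_correct_3pl(theta, a, b, c):
    # 3PL item characteristic curve: probability that a respondent with
    # ability theta answers an item with discrimination a, difficulty b,
    # and guessing parameter c correctly.
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

def flag_unreliable(theta_model, items, threshold=0.5):
    # Flag test instances (IRT items) on which the model's expected
    # probability of a correct prediction falls below the threshold.
    # items is a sequence of (a, b, c) triples, one per instance.
    probs = np.array([p_correct_3pl(theta_model, a, b, c) for a, b, c in items])
    return probs, probs < threshold

# Toy usage: one model ability, four instances of rising difficulty.
theta = 1.0  # the model's "ability", as estimated by the IRT fit
items = [(1.2, -1.0, 0.1), (0.8, 0.5, 0.1), (1.5, 1.5, 0.2), (1.0, 3.0, 0.2)]
probs, unreliable = flag_unreliable(theta, items)
for i, (p, u) in enumerate(zip(probs, unreliable)):
    print(f"instance {i}: P(correct) = {p:.2f} -> {'unreliable' if u else 'reliable'}")

Under this reading, the paper's 83.8% figure says that most of the classifier's errors land on instances such a flag would mark as unreliable.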


Notes

  1. Model-Agnostic: it does not depend on the type of model to be explained [18].

  2. All results can be accessed at: https://github.com/LucasFerraroCardoso/IRT_XAI.

References

  1. Abdi, H., Valentin, D.: Multiple correspondence analysis. Encycl. Meas. Stat. 2(4), 651–657 (2007)

  2. Arrieta, A.B., et al.: Explainable artificial intelligence (XAI): concepts, taxonomies, opportunities and challenges toward responsible AI. Inf. Fusion 58, 82–115 (2020)

  3. Baker, F.B.: The basics of item response theory (2001). http://ericae.net/irt/baker

  4. Biggio, B., Roli, F.: Wild patterns: ten years after the rise of adversarial machine learning. Pattern Recogn. 84, 317–331 (2018)

  5. Cardoso, L.F.F., Santos, V.C.A., Francês, R.S.K., Prudêncio, R.B.C., Alves, R.C.O.: Decoding machine learning benchmarks. In: Cerri, R., Prati, R.C. (eds.) BRACIS 2020. LNCS (LNAI), vol. 12320, pp. 412–425. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-61380-8_28

  6. Chicco, D., Jurman, G.: The advantages of the Matthews correlation coefficient (MCC) over F1 score and accuracy in binary classification evaluation. BMC Genomics 21(1), 1–13 (2020)

  7. Geirhos, R., et al.: Shortcut learning in deep neural networks. Nat. Mach. Intell. 2(11), 665–673 (2020)

  8. Gilpin, L.H., et al.: Explaining explanations: an overview of interpretability of machine learning. In: 2018 IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA). IEEE (2018)

  9. Gohel, P., Singh, P., Mohanty, M.: Explainable AI: current status and future directions. arXiv preprint arXiv:2107.07045 (2021)

  10. Guidotti, R., et al.: A survey of methods for explaining black box models. ACM Comput. Surv. (CSUR) 51(5), 1–42 (2018)

  11. Gunning, D., Aha, D.: DARPA's explainable artificial intelligence (XAI) program. AI Mag. 40(2), 44–58 (2019)

  12. Rousseeuw, P.J.: Silhouettes: a graphical aid to the interpretation and validation of cluster analysis. J. Comput. Appl. Math. 20, 53–65 (1987)

  13. Kim, B., Khanna, R., Koyejo, O.O.: Examples are not enough, learn to criticize! Criticism for interpretability. In: Advances in Neural Information Processing Systems 29 (2016)

  14. Koh, P.W., Liang, P.: Understanding black-box predictions via influence functions. In: International Conference on Machine Learning. PMLR (2017)

  15. Linardatos, P., Papastefanopoulos, V., Kotsiantis, S.: Explainable AI: a review of machine learning interpretability methods. Entropy 23(1), 18 (2020)

  16. Martínez-Plumed, F., et al.: Item response theory in AI: analysing machine learning classifiers at the instance level. Artif. Intell. 271, 18–42 (2019)

  17. Molnar, C.: Interpretable Machine Learning. Lulu.com (2020)

  18. Molnar, C., Casalicchio, G., Bischl, B.: Interpretable machine learning – a brief history, state-of-the-art and challenges. In: Koprinska, I., et al. (eds.) ECML PKDD 2020. CCIS, vol. 1323, pp. 417–431. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-65965-3_28

  19. Naiseh, M., et al.: Explainable recommendation: when design meets trust calibration. World Wide Web 24(5), 1857–1884 (2021)

  20. General Data Protection Regulation (GDPR). Intersoft Consulting. Accessed 24 Jan 2018

  21. Pedregosa, F., et al.: Scikit-learn: machine learning in Python. J. Mach. Learn. Res. 12, 2825–2830 (2011)

  22. Ribeiro, J., et al.: Does dataset complexity matters for model explainers? In: 2021 IEEE International Conference on Big Data (Big Data). IEEE (2021)

  23. Sabatine, M.S., Cannon, C.P.: Approach to the patient with chest pain. In: Braunwald's Heart Disease: A Textbook of Cardiovascular Medicine, 9th edn., pp. 1076–1086. Elsevier/Saunders, Philadelphia (2012)

  24. Vanschoren, J., et al.: OpenML: networked science in machine learning. ACM SIGKDD Explor. Newsl. 15(2), 49–60 (2014)

  25. Wachter, S., Mittelstadt, B., Russell, C.: Counterfactual explanations without opening the black box: automated decisions and the GDPR. Harv. JL Tech. 31, 841 (2017)


Author information


Correspondence to Lucas F. F. Cardoso.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Cardoso, L.F.F., et al. (2022). Explanation-by-Example Based on Item Response Theory. In: Xavier-Junior, J.C., Rios, R.A. (eds.) Intelligent Systems. BRACIS 2022. Lecture Notes in Computer Science, vol. 13653. Springer, Cham. https://doi.org/10.1007/978-3-031-21686-2_20


  • DOI: https://doi.org/10.1007/978-3-031-21686-2_20


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-21685-5

  • Online ISBN: 978-3-031-21686-2

  • eBook Packages: Computer Science, Computer Science (R0)
